Wednesday, January 21, 2026

The Rise of Airt

I began this blog post in February 2025. It is now January 2026.

Let's go back in time 11 months...

Joe from Feb 17, 2025: I woke up this morning to dive into my work in progress and saw that Microsoft 365 had added (without asking me for permission) something called Copilot to MS Word.

I researched what it was and had a WTF moment.

Apparently Microsoft has enabled a Large Language Model Artificial Intelligence to "assist" with writing.

It can summarize your document, rewrite it, refine it, answer questions about your content, and even write new content.

Copilot immediately began "suggesting" what my next word, sentence, and paragraph could be, based on what it read in my WIP.

Yikes!

After figuring out how to turn off Copilot (by, ironically, asking Grok, X's AI), I saw some chatter on Facebook about someone who self-pubbed an ebook and accidentally left the AI prompts in the text.

I had no sympathy for that author in the face of the negative social media shitstorm that ensued. An author using AI is not an author. They're a prompter.

Amazon may have anticipated this, because several years ago KDP added a checkbox to the publishing process asking, "Did you use AI tools in creating texts, images, and/or translations in your book?"

This question is actually nuanced.

What if I hire a cover artist who used a prompt on iStock to help create an image? 

What if I use a proofreader that uses AI to help spot typos, grammar errors, or continuity problems?

What if I used Microsoft's own spellcheck? Is that AI?

This is a big can of worms that I'm trying to wrap my head around.

When the digital photography revolution began years ago, some were afraid. Photographers feared that manipulated pictures would devalue their art. Camera companies feared that people would stop buying and developing film.

The camera companies were correct; film is no longer the norm. 

But photographers wound up with more tools to make their pics better.

This seems... different.

Yesterday I saw an ad for an app that takes a sample of your voice and then "sings" in your voice to any song. I also saw another app that will take a still picture and "bring it to life" by animating it.

In the future we'll all be able to play guitar, act in movies, design a building blueprint, write legal documents, code a videogame, and do many other things that once required years of training and education.

Learning a skill will soon be replaced by learning how to prompt AI to reach the desired output, no actual talent required.

We'll likely be able to see holograms of the Beatles perform in concert, supporting a new Beatles album written and performed by AI in the style of the Beatles.

So... what happens to fiction writers?

Fifteen years ago I wrote about the self-publishing revolution, and those who feared a tsunami of crap.

As I predicted, the tsunami of crap never happened. The cream tends to rise to the top, and the crap sinks.

Is this just another tsunami of crap? Will readers be able to separate wheat from chaff? Will the real writers still sell well while fake AI prompt writers tank from bad reviews? Will certain writing brands that are free of AI have integrity and correspondingly better sales?

I have big doubts.

What if AI prompt writing is as good as, or better than, what real writers can do, in the same way AI will be able to diagnose and treat disease better than actual doctors and analyze legal contracts better than actual lawyers?

I've become dubious of everything these days in an effort to suss out what's generated. Is the picture I'm viewing real or AI? Is that video a deepfake? Are these social media replies and emails from human beings or AI?

Large Language Models train on human writing, and according to three leaked data sets--Books3, LibGen, and Anna's Archive--dozens of my books were pirated and used to train LLMs.

They also trained on books written by my friends and peers. So I gave Grok this prompt:

Write a 500 word short story about a border collie in the style of (a writer I know).

Grok took less than five seconds:

Here’s a 500-word thriller story about a border collie, written in the taut, gripping style of (a writer), known for fast-paced narratives and edge-of-your-seat tension:

The wind howled through the pines, carrying the scent of blood and pine needles. Tessa crouched low, her black-and-white coat blending with the shadows of the Montana forest. Her ears twitched, sharp as blades, catching every snap of a twig, every rustle of leaves. She wasn’t just any border collie—she was a hunter, trained to herd more than sheep.

Tonight, her prey was human.

The man had stumbled into her world three hours ago, reeking of whiskey and desperation. Tessa’s owner, Mara, lay still in the cabin, her breathing shallow, a crimson stain blooming across her chest.

The man had done that—his knife flashing in the firelight before he’d fled into the dark. Tessa hadn’t hesitated. She’d bolted after him, her instincts screaming louder than any command.

Now, the forest was a chessboard, and Tessa was the queen. She tracked his clumsy path—broken branches, smeared mud, a discarded boot caught in a briar.

He thought he could outrun her.

He was wrong.

Her paws barely touched the ground, silent as death, her eyes glinting with a predator’s focus. She could hear him now, his ragged gasps cutting through the night.

He was close.

The man crashed through a clearing, his flashlight beam jerking wildly. Tessa circled wide, staying downwind. She saw him—tall, unshaven, the knife still clutched in his shaking hand. Blood stained the blade.

Mara’s blood.

A growl rumbled low in Tessa’s throat, but she held it back. Not yet. Timing was everything.

He tripped, sprawling into the dirt, cursing as the flashlight rolled away. Tessa tensed, muscles coiling like springs.

He scrambled up, eyes darting, sensing something—her.

“Stay back, you damn mutt!” he yelled, slashing the air with the knife.

He didn’t see her. Not yet.

She darted left, then right, a blur of motion, herding him like a wayward lamb. He swung the blade, missed, stumbled again. Panic made him sloppy. Tessa’s heart pounded, but her mind was ice-cold. She knew this game—push, feint, exhaust.

He’d drop soon. He had to.

A ravine loomed ahead, its edge masked by undergrowth. Tessa saw it; he didn’t. She barked once, sharp and commanding, driving him backward. His heel caught a root, and he flailed, arms pinwheeling.

The knife flew from his grip as he went over, a scream tearing from his throat. It cut off with a wet thud far below.

Tessa padded to the edge, peering down. The man lay crumpled, motionless, the knife glinting beside him. Her ears flicked. Silence returned, heavy and final.

She turned, her mission done, and raced back through the trees toward the cabin. Mara needed her now—needed the collie who’d just become more than a pet, more than a herder.

Tessa was a guardian, a shadow with teeth, and she’d protect her own to the last breath.

Joe sez:
Well, shit. That's pretty good.

As good as a human being?

Maybe not. But for a few years I judged the Writer's Digest Short Story Contest, and Grok's story would have placed in the top 5.

For my last book (FREE - A CODENAME CHANDLER NOVEL) I asked Grok, "If I fire a rocket from Plum Island New York heading West at 17kmph, how long will it take to get into low earth orbit, 1500km, over the Pacific?"

Grok answered in two seconds, with math to support the answer. I didn't use any of what Grok told me in my book, but that gave me the data I needed to know where my satellite would end up.
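
For the curious, the key number is the circular orbital velocity at that altitude, v = sqrt(GM/r). Here's a quick Python sanity check using textbook values for Earth (a back-of-envelope sketch, not Grok's actual output; note that a rocket doing a literal 17 km/h would never reach orbit, so the prompt's figure presumably meant the ~17,000 mph usually quoted for low orbit):

    import math

    MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.371e6    # mean Earth radius, m

    r = R_EARTH + 1.5e6  # orbital radius at 1500 km altitude, m

    v = math.sqrt(MU / r)         # circular orbital velocity
    period = 2 * math.pi * r / v  # time for one full lap

    print(f"orbital velocity: {v / 1000:.2f} km/s ({v * 2.23694:,.0f} mph)")
    print(f"orbital period:   {period / 60:.1f} minutes")
    # -> roughly 7.1 km/s (about 15,900 mph) and a ~116 minute orbit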

Was I cheating? Have I been cheating using Wikipedia and Google all of these years? Prior to that, was I cheating by doing my research in a library, reading books written by others?

Are digital photographers cheating when they use Photoshop to make their photos better?

Are graphic artists cheating by using software that adds a light source and natural shading?

This is becoming tricky. Beware of blanket statements.

Such as: "Blankets keep me warm."

Heh.

I make up my jokes. I don't steal them.

But I'm sure I must have used a joke that I thought I made up, but actually heard years before and forgot about. Not intentionally. But I'm sure it has happened.

I'm pretty sure I've heard the joke, "Don't buy blankets; there is a cover charge."

Or did I just make it up? Unknown.

After all, I'm using a language (English) in a culture (the US), and I had nothing to do with the creation of either, but I am informed by my upbringing, using words I didn't invent and ideas that predate me.

I will not start using AI writing prompts. I'm a purist. I value the challenge and fun of my mind vs the blank page.

But should I get angry at those who are using AI? Is AI actually any different than brainstorming with a friend and taking their suggestions? If so, why? Because your friend has a pulse?

AI is becoming more prevalent. Unless we return to typewriters, it's gonna creep in. I could have read four books about rocket science and done my own math to launch my fictional satellite. It was a lot easier, and less of a PITA, to ask AI.

AI, love it or hate it for art, is helpful.

Here's one that gets me: I would love to have my books available in every language. But I don't have hundreds of thousands of dollars to pay people to translate my 60 novels into 20 different languages. Translations can cost over two grand per book.

Should I use AI? Would I be taking away a human being's job? Would I be reaching an audience I could never afford to reach otherwise? Both?

What if I hire a human translator, and they use AI? And what if they don't disclose that to me?

What if a reader from Thailand buys WHISKEY SOUR and has AI translate it into Thai? Shouldn't I embrace that?

This is coming soon. It may already be here.

Eventually we're going to have a movie where The Marx Brothers meet The Three Stooges. We're going to see Eastwood's Man With No Name have a gunfight with Bronson's Harmonica (OUATITW). The visuals will be seamless. And as AI improves, the writing will be seamless.

Ask any AI to write a story in the style of JA Konrath, and they do an okay job. Eventually they'll do a perfect job.

This scares me, as I said. But there are opportunities here.

People who learn how to use AI to write fiction might have the same advantage that traditional photographers had when Photoshop was first released.

Do we use this advantage?

Do we get mad at others who use this advantage?

The horse has left the barn. Now what are we going to do about it?

Joe sez: For some reason, I never published the above blog post. Probably because I didn't know where to stand, or what side to take.

Cut to Joe in 2026:

Things have come a long way since the old days of 2025.

Let's call human art "art" and AI art "airt". Pronounced like "air".

Airt is stolen valor. My previous blog post was about the Bartz v Anthropic lawsuit, and I'm on the side of writers.

That said, it is difficult to distinguish art from airt, and soon it will be impossible.

Certainly airt can make you feel the same things art makes you feel. The only difference is prior knowledge of the artist or airtist.

Fiction writing is just a voluntary hijack of the limbic system. A drug can make you happy or sad or giggly.

Same with a story. No matter who writes it.

Chemical emotion happens, and AI is getting really good at it.

So I again prompted AI to write a 500 word story in the style of a writer I know, combining a dog and the Many Worlds Interpretation of Quantum Mechanics.

Here's what Grok did:

The dog’s name was Max, a border collie with eyes like polished obsidian and a mind that never quite settled.

His human, Elena, was a post-doc in quantum foundations at the university, the kind who talked to him in full sentences about wave functions and decoherence as they walked the wooded trail behind the house every evening.

Max didn’t understand the words, not exactly. But he understood patterns. He understood that when Elena said “superposition,” her voice lifted the way it did when she tossed his frisbee high into the air—something suspended, not yet decided.

He understood “observation” because every time she looked at him, the world narrowed to the two of them, the rest collapsing into background.

One October night, the air smelled of frost and wet leaves. Elena clipped the leash to his collar and they stepped onto the trail. Halfway along the loop, a squirrel darted across the path.

Max lunged. The leash snapped taut.

Elena stumbled, her foot catching a root. She fell hard, head striking a rock with a sound Max would never forget—wet, final.

He barked, frantic. He licked her face. No response. Blood pooled beneath her hair, dark against the moonlight.

Then something shifted.

Max felt it first in his paws: a vibration, like the low hum of the refrigerator back home but deeper, inside his bones. The forest forked.

Not the path--the world.

One branch: Elena dead on the ground, cooling, sirens eventually coming, Max taken to a shelter, the long dimming of days.

Another branch: Elena groaning, sitting up, dazed but alive, pressing a hand to her scalp and laughing weakly at his worried whine.

Both branches felt equally real. Both were real.

He stood between them, trembling. The squirrel chittered from a branch overhead, indifferent. Max looked at Elena’s body—no, at both versions--and understood, with the crystalline clarity only animals achieve when language falls away, that every choice, every quantum event, split the universe like light through a prism.

He had always lived in many worlds. Every time he chased a ball and missed, there was a Max who caught it cleanly. Every time he hesitated at thunder, there was a Max who stood fearless. He had simply never noticed the seams before.

Now he saw them everywhere.

He nosed the Elena who was breathing. She blinked, confused, but reached for him. In the other branch, the dead Elena stayed still.

Max felt the divergence widen, the dead branch receding like a dream upon waking. He whined, not from fear but recognition. The living Elena clipped the broken leash back on his collar and limped home, murmuring apologies.

That night, while she slept with ice on her head, Max sat by the window watching moonlight fracture across the glass. He understood now why smells layered so richly—one world’s rabbit trail overlaying another’s, infinite possibilities braided into each breath.

He also understood loss. In countless branches, Elena never stood up. In some, he was the one struck by a car years earlier, or never adopted from the pound. Grief and joy coexisted, superimposed, until observed.

Max rested his head on his paws. Tomorrow they would walk again. Somewhere, a squirrel would cross the path. Somewhere, Elena would fall.

But here, in this branch, she would rise.

And Max, guardian of the seams, would choose—quietly, instinctively—which world to follow her into.

Joe sez: I've reread that story six times.

I cry each time.

I cry because it's great. And I cry because it's great.

But is it art? 

No. But it's airt.

Airt is the result of code packets replying to prompts. It is not an autonomous creative endeavor. It is not self-aware. Grok doesn't even know English in the traditional sense.

But it sure knows how to string together data.

And people are aware of this.

San Diego Comic-Con recently posted this about its Art Show:

Material created by Artificial Intelligence (AI) either partially or wholly, is not allowed in the art show. If there are questions, the Art Show Coordinator will be the sole judge of acceptability.

Not allowing airt is based on fear. Maybe the fear is warranted, but the lines are blurry and getting blurrier.

Photoshop did not ruin photography. Airt will not destroy art.

But we need agreed-upon definitions, rules, and laws regarding this new technology, and if those laws are fear-driven rather than pragmatic, that leads to slippery slopes.

Grok defines art as: The expression or application of human creative skill and imagination, typically in visual, auditory, or performative forms, producing works that are appreciated primarily for their beauty, emotional power, intellectual stimulation, or ability to provoke thought.

By that definition, AI cannot create art. It is not human, and cannot produce work without prompts.

AI is not creative. It is a copy machine/blender.

Those airt dog stories hit every mark they needed to. AI understands story better than I ever will (because it stole that ability from me and thousands of other professional fiction writers). Those stories made me think, and feel, and there wasn't a word wasted.

But those stories aren't art.

How do you define art? Is it all in the eye of the beholder?

If we examined two drawings, one human made and one AI made, we might not be able to tell which is which. The AI one might be better on any level we judge art.

But I contend it cannot be defined as art if it isn't the willful creation of a human being.

That said, there are strong arguments against my position. 

I have begun using AI to proofread. Not to write, or to rewrite, but to check for errors.

And Grok finds a lot of errors.

I don't ask Grok for suggestions. That's my world, my jam. I don't need or want help.

But I did ask Grok to translate some of my books into different languages. Which it did. Whole books into French and German, and it only took me a few hours going chapter by chapter, adjusting prompts for consistency.

Then I hired editors/proofreaders in those languages to make sure they held up to my English versions.

It saves the foreign proofreaders time, and me money. It will allow me to eventually get 60+ novels into 20+ languages.
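
If you're wondering what that looks like in practice, the loop itself is trivial; the hard parts are the prompt and the human pass afterward. A hypothetical sketch in Python (ask_llm is a stand-in for whatever model actually does the translating, and these aren't my real prompts):

    # ask_llm is hypothetical: plug in whichever model/API you actually use.
    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("stand-in for a real LLM call")

    def translate_book(chapters, language, glossary):
        # The glossary is how prompt adjustments keep character names and
        # recurring terms consistent from chapter to chapter.
        terms = "; ".join(f"{src} -> {dst}" for src, dst in glossary.items())
        translated = []
        for i, chapter in enumerate(chapters, 1):
            prompt = (f"Translate chapter {i} into {language}. "
                      f"Keep these names and terms consistent: {terms}. "
                      f"Preserve the tone, humor, and pacing.\n\n{chapter}")
            translated.append(ask_llm(prompt))
        return translated

    # The output still goes to a human editor/proofreader in that language.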

I was musing about this eleven months ago. Now it's happening.

I currently have a human cover artist. But I can see getting airt for my covers in the future.

Hell, I can see using airt to create an entire movie that I wrote.

Sacrilege? I don't think so. If I had a movie studio and crew at my disposal, I'd use it. Since I don't, why shouldn't I let AI have a crack at it?

I love movies and film. I'd love to have my work adapted. I love writing screenplays. Here's a thriller called THE SITE that I've had on my hard drive since 2006. Read it. It's fun.

But Hollywood has gatekeepers. It's an exclusive party, and most of us are not invited.

I'd love for my books to be movies. And some day, LLMs will be able to do that.

Should I say no? Hold out, fingers crossed, hoping someone buys an option and gets a story of mine before the camera?

Hint: I was one of the first legacy published authors to reject the gatekeeping system and self-publish.

Movies will go the way of books. You don't need a million dollar company when you can do it yourself.

What do you think? If you've read my books and want to see movies and TV shows, would you care that they were airt if they were indistinguishable from human-made?

I'm going to end this long post not with a question or a prediction.

Instead, I'll end it with my English Bulldog, Dr. Belgium, riding the kaiju Gamera.


Pretty cool. And something I never could have done, even with years of practice.

We're entering a new era. We can be afraid of it. We can bemoan it. We can hate the ones who use it.

But it's here. We have to deal with it.




Sunday, November 09, 2025

Anthropic and the Future of Copyright

Over two years ago I wrote a blog post about AI. Specifically about Large Language Models that have been trained on pirated novels, and the resulting class action lawsuit, Bartz v Anthropic.

Since writing that post my views have changed.

Part of my argument can be summed up as: If AI isn't stealing verbatim, and ideas cannot be copyrighted, where is the copyright infringement?

I still mostly agree with that statement. Apparently, so did the court. Here's an AI summary:

Bartz v. Anthropic (3:24-cv-05417, N.D. Cal.) was a class action copyright infringement lawsuit filed in August 2024 by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson against AI company Anthropic PBC. The plaintiffs alleged that Anthropic used pirated copies of their books, obtained from "shadow libraries" like LibGen and PiLiMi, to train its Claude large language models (LLMs) without permission. The case tested the application of fair use doctrine under Section 107 of the Copyright Act to AI training data.

U.S. District Judge William Alsup granted partial summary judgment in favor of Anthropic on fair use grounds for certain activities, while denying it for others.

  • Fair Use for LLM Training: The court ruled that using copyrighted books to train LLMs was "spectacularly" transformative and constituted fair use, as the process analyzes statistical relationships in text to enable generation of new, original content without reproducing the originals. This was analogized to human learning, emphasizing that copyright does not protect ideas, methods, or concepts. No market harm was found, as there was no evidence of infringing outputs from Claude mimicking or reproducing the plaintiffs' works.
  • Fair Use for Digitization of Purchased Books: Anthropic's scanning of lawfully purchased print books (involving destructive processes like removing bindings) to create internal digital copies was deemed fair use, as it was a transformative format shift for storage and searchability without distribution or increasing the number of copies. The court found no market harm, distinguishing it from unauthorized duplication.
  • No Fair Use for Pirated Copies: However, the acquisition and retention of pirated digital copies to build a central library was not fair use and constituted infringement, even if later used for training. This was non-transformative, displaced the market for authorized sales, and was unnecessary given lawful alternatives. Judge Alsup expressed skepticism that subsequent fair use could justify initial piracy, noting lawful acquisition as essential.

Joe sez: Seems like Judge Alsup mostly agreed with me; ideas can't be copyrighted, and books absorbed and used to generate new ideas are transformative.

But Judge Alsup said that training AI on pirated books is not fair use when there are lawful alternatives for acquisition other than stealing.

Currently there is the option for authors to join this class action and receive $3,000 for each work of theirs that was pirated and used.

As I understand it, this is compensation for the stealing, but not for the actual use of copyrighted IPs, or what that use entails.

This reminds me of the Napster days, when record companies sued kids who downloaded songs for free and courts ordered them to pay vast sums for each infringement.

Punitive damages for stealing.

But what's happening here is something very different.

What every LLM has done by training on the IP of fiction writers is incorporate those millions of words into their own programming.

This isn't about using my work to generate transformative ideas.

This is about using my work to create a program that can generate transformative ideas.

Big difference there.

LLMs aren't basing works on my IP.

LLMs are able to make money because my IP helped to build them.

Just as cinnamon is an ingredient used to make cinnamon rolls, intellectual property was used as an ingredient to improve AI.

You can't take the cinnamon out of the roll after it has been baked.

While the roll may be a transformative work that uses cinnamon, the recipe cannot exist without the cinnamon. 

And if the cinnamon is stolen, every time a roll is sold, the original owner of the cinnamon should be paid.

You cannot have AI in its current state without accounting for the IP it stole to get to its current state. You cannot get that genie back in the bottle. 

This isn't about punitive damages for stealing a work.

This is about a trillion dollar industry that cannot exist without having trained on the IP it stole.

No cinnamon, no cinnamon rolls to sell.

No training on IP, no AI to sell.

The fact that the IP was stolen almost seems beside the point. Legally paying $7.99 for a paperback, scanning it, and then training an LLM on it still goes beyond fair use, for a few reasons.

First, because of scale. I learned to write mysteries by reading and imitating (without stealing from) writers like Robert B. Parker, Ed McBain, and John D. MacDonald. But I am only one person who can only read so many books.

LLMs have devoured millions of books, and have taken what they learned to make money.

This cannot be covered by the current legal definitions of copyright and fair use. New definitions must be created. 

Second, because of speed. I have a limit of how quickly I can read and write. AI can read and write millions of words in a fraction of the time. A machine with an unlimited appetite for devouring and learning from IP is unprecedented, and fair use cannot apply. It's simply not fair at all.

Again, current legal definitions must be refined. As technology improves, laws must keep up.

Third, because of how AI learns, it needs massive datasets to ingest and process in order to spot patterns. Once patterns are learned, LLMs are able to predict what comes next in a sequence.
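
To make "predict what comes next" concrete, here's a toy sketch in Python: a word-level bigram counter. It's nowhere near a real LLM, but it's the same basic mechanic this paragraph describes: ingest text, count patterns, emit the most likely continuation.

    from collections import Counter, defaultdict

    # Count which word follows which in the ingested text.
    corpus = "the dog ran and the dog barked and the cat ran".split()
    following = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        following[word][nxt] += 1

    # "Prediction" is just the most frequent continuation seen in training.
    def predict_next(word):
        return following[word].most_common(1)[0][0]

    print(predict_next("the"))  # "dog" -- it followed "the" twice, "cat" once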

Books are narratives that creatively answer the "what comes next" question. AI needs cogent, modern, professional books to learn from. Books that have been expertly written, vetted, edited, composed.

Books are the cinnamon in the cinnamon roll.

AI didn't create its own version or facsimile of cinnamon. It isn't taking the "idea" of cinnamon. It outright stole the cinnamon, continues to use that stolen cinnamon, and cannot be separated from it.

LLMs have stolen more than 200 of my books, and they've learned from them. They've gotten so good at imitating my writing style that fans will soon be able to have their favorite AI create books that are identical to mine in tone and quality.

Because my IP has been absorbed by AI, every question asked by a paying user gets an answer that has a tiny bit of me in it; my style, my jokes, my tone. 

This isn't analogous to downloading an mp3 file of a Metallica song without paying. Stealing a song from Metallica doesn't allow the thief to instantly write a song that sounds exactly like Metallica.

This is more like The Stepford Wives, and writers are non-consensually training their own replacements.

It's the stealing of brand secrets. It's the outright theft of the individually crafted blueprints of how thoughts and ideas are uniquely crafted. It's stealing our voices, and copying how our brains work, and being able to replicate not only our ideas but the unique expression and implementation of those ideas, without permission.

Permission is the key here. If you want to make a Virtual J.A. Konrath, surely that should only be done with my permission? If Hollywood can't steal Crispin Glover's face, why should LLMs be able to steal my life's work? 

These billion dollar companies are able to make billions because they illegally downloaded and absorbed millions of books and continually use that information to perform tasks. They cannot start over from scratch without this stolen data. My IP is now part of every LLM's programming that trained on my books. My trade secrets have been stolen.

Should writers be compensated? Absolutely.

If I were to license my IP--and all of the work and knowledge and skill that went into creating a body of work that constitutes several million published words--I would ask for consulting and usage fees much higher than $3k per book.

You want my land to build your empire? Pay me for the land. You can't build without my permission and then offer me pennies. Especially when your empire is worth billions.

So I don't agree with Judge Alsup that this is copyright infringement only because of piracy.

This is copyright infringement because my life's work--my brand--was taken without my permission and used to build your empire. Even if the LLMs each bought one copy of every book I've written, the infringement remains. Reading one of my books, whether you buy it or not, does not mean you have the right to use it any way you want.

This reminds me of the Henrietta Lacks lawsuit. As summed up by AI:

Henrietta Lacks was an African American woman who died of cervical cancer in 1951 at Johns Hopkins Hospital in Baltimore, Maryland. During her treatment, doctors took samples of her tumor cells without her knowledge or consent, a common practice at the time, especially for Black patients under segregation-era policies. These cells, later named HeLa (from her name), were the first human cells to reproduce indefinitely in a lab, revolutionizing medical research. They've contributed to breakthroughs like the polio vaccine, cancer treatments, gene mapping, and COVID-19 vaccines, generating billions in profits for biotech companies. However, Lacks' family was unaware of this until the 1970s and received no compensation or recognition for decades.

Henrietta did not give informed consent for the use of her cells. The family sued. A settlement was reached.

The permission LLMs need to legally learn from an IP encompasses more than buying a copy.

If my books make up only 0.000006% of the overall training data (that's an AI estimate), that still results in an untold number of answers for paying users that benefit from my work, when I gave no permission. I did not enter into a contract with these LLMs for this specific kind of use. I do not receive royalties every time my work is used by LLMs.

This goes beyond current US Copyright Law.

The recent Bartz v. Anthropic settlement addressed claims related to works under registered copyright. However, this leaves a vast number of unprotected creators: hundreds of thousands of self-published works that were never formally registered but were nonetheless pirated and incorporated into training datasets such as LibGen, Books3, and the Pirate Library Mirror.

These datasets have been utilized by virtually every major LLM, fueling the growth of multi-billion dollar AI empires at the expense of--and without permission from--the original creators.

In my own case, I have authored over 200 works that have been pirated and used in AI training, yet only a dozen or so were registered for copyright. This highlights a critical barrier: the Supreme Court's decision in Fourth Estate Public Benefit Corp. v. Wall-Street.com, LLC (2019), which mandates copyright registration as a prerequisite for filing infringement lawsuits, would have to be overturned.

I contend that this ruling is unconstitutional.

Copyright protection is granted automatically upon creation, yet the registration process imposes lengthy and costly government hurdles that effectively deny access to justice. Fiction IP ownership can be readily proven through ISBNs, ASINs, and other verifiable records. I can demonstrate authorship of my books, their piracy, and their unauthorized use by LLMs. Denying the ability to litigate without registration amounts to the U.S. government restricting citizens' rights, creating unequal protection based on one's ability to pay fees or jump through bureaucratic hoops.

In essence, I own intellectual property but am barred from defending it legally.

The Fourth Estate decision violates multiple constitutional provisions, including:

  • The Intellectual Property Clause (Article I, Section 8, Clause 8), by undermining the promotion of science and useful arts;
  • The Petition Clause of the First Amendment, by restricting the right to seek redress;
  • Due Process under the Fifth Amendment, by denying fair legal recourse;
  • Equal Protection under the Fifth Amendment, by discriminating against those unable to afford registration;
  • The Ninth Amendment, on numerous grounds related to unenumerated rights;
  • The Seventh Amendment, by limiting access to civil jury trials;
  • Article III, Section 2, by constraining judicial power over cases and controversies.

As a fiction writer whose sole income derives from my creative works, I am directly harmed by Big Tech's theft, yet I am unable to sue due to this archaic registration mandate. This case represents an opportunity to champion the rights of indie authors nationwide, potentially reshaping copyright law in the AI age.

But it gets worse for me.

Trade secrets require no copyright registration at all and are enforced through confidentiality and misappropriation laws, and LLMs have stolen and can reproduce my trade secrets.

Ask an AI to write a book in the style of J.A. Konrath (me) and it can, using a process that relies on unauthorized use of protected material at multiple stages. Modern generative AI models, such as those for text generation, are trained on massive datasets that often include books like mine, scraped from pirate sources without permission or licensing.

This goes beyond ideas and plots and settings and characters. These LLMs have stolen--and can reproduce on demand--my style, my tone, my humor, my pacing; everything that makes my books unique to me and to my readers.

This training involves reproducing and analyzing my works to extract patterns. All LLMs train on pattern recognition and mimicry. This greatly differs from human imitation, where a writer might legally read my books and create original content inspired by my approach; LLMs instead rely on systematic, unauthorized data ingestion at massive scale.

This isn't analogous to being fired from an assembly line and replaced by a robot.

I am now forced to compete unfairly--which violates Section 5 of the Federal Trade Commission Act--with any user who asks AI to create a work in my style. It's like stealing my secret recipe and publishing it on the Internet. If a reader asks an AI to write a JA Konrath book, AI can do it as well as I can. Why should readers buy my books? Why buy the cow when LLMs are giving out free milk?

I used AI several times in writing this blog post. I also use AI to proofread my books, and AI is helping me translate my books into different languages.

LLMs are putting proofreaders and translators out of business. But these LLMs didn't steal from--or learn from--proofreaders or translators. They are doing the same job, having learned from the same set of rules.

But LLMs could not write like J.A. Konrath or think like J.A. Konrath without stealing from J.A. Konrath because they had no meeting of the minds with J.A. Konrath on what they can incorporate from J.A. Konrath into their models. 

I currently pay $400 a year for AI.

Should AI be paying me royalties every time someone asks it a question and it answers using a bit of something it stole from me?

I think so. 

Should LLMs who have infringed on the IP of writers pay those writers a settlement for involuntarily helping program those LLMs?

I think so.

Should Fourth Estate Public Benefit Corp. v. Wall-Street.com, LLC be challenged in court?

I think so.

Should US Copyright Law be changed to incorporate new technology?

I think so.

Buying my books does not grant permission to build a company founded on my books.

AI estimates that the Top 20 LLMs have already generated 6 trillion responses.

Out of those 6,000,000,000,000 responses so far, if those LLMs trained on my 200+ books (based on the AI estimate that my books make up 0.000006% of the overall training data), there have been as many as 36,000,000 responses made by LLMs that involved learning from my work.

As of November 2025, my IP has perhaps assisted LLMs in crafting 36 million answers.

If I received 1/10th of a cent for each of these answers that can be attributed to my IP, I'd be owed $36,000 by LLMs. So far. Plus a royalty situation going forward.
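
The back-of-envelope math, for anyone who wants to check it (note the unit assumption: the figures above follow from reading 0.000006 as a plain fraction; taken strictly as a percent, they'd be 360,000 responses and $360):

    total_responses = 6_000_000_000_000  # AI-estimated responses, top 20 LLMs
    rate = 0.001                         # 1/10th of a cent per response

    share = 0.000006  # my share of the training data, read as a fraction
    attributed = total_responses * share
    print(f"{attributed:,.0f} responses -> ${attributed * rate:,.0f}")
    # -> 36,000,000 responses -> $36,000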

These numbers are staggering. And we're still in the early days of AI and LLMs.

It is not my fault you stole my land to build your empire. 

Should every writer of every IP used by LLMs in their training be compensated in this manner?

I think so.

What do you think?